Results 1-20 of 134
1.
J Clin Hypertens (Greenwich) ; 26(4): 425-430, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38501749

ABSTRACT

Previous work comparing safety and effectiveness outcomes for new initiators of angiotensin-converting enzyme inhibitors (ACEi) and thiazides demonstrated more favorable outcomes for thiazides, although the cohort definitions allowed addition of a second antihypertensive medication after a week of monotherapy. Here, we modify the monotherapy definition, imposing exit from the cohorts upon addition of another antihypertensive medication. We determine hazard ratios (HR) for 55 safety and effectiveness outcomes across six databases and compare the results to earlier findings. We find that, for all primary outcomes, the statistically significant differences in effectiveness between ACEi and thiazides were not replicated (HRs: 1.11, 1.06, and 1.12 for acute myocardial infarction, hospitalization with heart failure, and stroke, respectively). While statistical significance is similarly lost for several safety outcomes, the safety profile of thiazides remains more favorable. Our results indicate a less striking difference in effectiveness between thiazides and ACEi and reflect some sensitivity of the findings to the modification of the monotherapy cohort definition.


Subjects
Angiotensin-Converting Enzyme Inhibitors; Hypertension; Humans; Angiotensin-Converting Enzyme Inhibitors/adverse effects; Antihypertensive Agents/adverse effects; Diuretics/adverse effects; Hypertension/drug therapy; Sodium Chloride Symporter Inhibitors/adverse effects; Thiazides/adverse effects
3.
Stat Med ; 43(2): 395-418, 2024 01 30.
Article in English | MEDLINE | ID: mdl-38010062

ABSTRACT

Postmarket safety surveillance is an integral part of mass vaccination programs. Typically relying on sequential analysis of real-world health data as they accrue, safety surveillance is challenged by sequential multiple testing and by biases induced by residual confounding in observational data. The current standard approach based on the maximized sequential probability ratio test (MaxSPRT) fails to satisfactorily address these practical challenges and remains a rigid framework that requires prespecification of the surveillance schedule. We develop an alternative Bayesian surveillance procedure that addresses both aforementioned challenges using a more flexible framework. To mitigate bias, we jointly analyze a large set of negative control outcomes, that is, adverse events with no known association with the vaccines, to inform an empirical bias distribution, which we then incorporate into estimating the effect of vaccine exposure on the adverse event of interest through a Bayesian hierarchical model. To address multiple testing and improve flexibility, at each analysis timepoint we update a posterior probability in favor of the alternative hypothesis that vaccination induces higher risks of adverse events, and then use it for sequential detection of safety signals. Through an empirical evaluation using six US observational healthcare databases covering more than 360 million patients, we benchmark the proposed procedure against MaxSPRT on testing errors and estimation accuracy, under two epidemiological designs: the historical comparator and the self-controlled case series. We demonstrate that our procedure substantially reduces Type 1 error rates, maintains high statistical power and fast signal detection, and provides considerably more accurate estimation than MaxSPRT. Given the extensiveness of the empirical study, which yields more than 7 million sets of results, we present all results in a public R ShinyApp. As an effort to promote open science, we provide a full implementation of our method in the open-source R package EvidenceSynthesis.
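
A minimal sketch of the core idea described above, not the authors' implementation (which uses a full Bayesian hierarchical model in the R package EvidenceSynthesis): estimate an empirical bias distribution from negative-control effect estimates, then compute an approximate posterior probability that the true log rate ratio for the outcome of interest exceeds zero. All numbers and variable names below are hypothetical.

```python
# Sketch: empirical bias distribution from negative controls, then an
# approximate posterior probability of increased risk. Values are made up.
import numpy as np
from scipy import optimize, stats

# Log rate-ratio estimates and standard errors for negative-control outcomes
nc_beta = np.array([0.05, -0.10, 0.20, 0.02, -0.08, 0.15])
nc_se = np.array([0.10, 0.12, 0.15, 0.08, 0.11, 0.09])

def neg_log_lik(params):
    """Negative log-likelihood of a normal bias distribution N(mu, sigma^2),
    accounting for each negative control's sampling error."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    total_sd = np.sqrt(sigma**2 + nc_se**2)
    return -np.sum(stats.norm.logpdf(nc_beta, loc=mu, scale=total_sd))

mu_hat, log_sigma_hat = optimize.minimize(neg_log_lik, x0=[0.0, np.log(0.1)]).x
sigma_hat = np.exp(log_sigma_hat)

# Outcome of interest: observed log rate ratio and its standard error
beta, se = 0.35, 0.12

# Approximate posterior for the true effect under a flat prior,
# after subtracting the estimated bias and inflating the variance
post_mean = beta - mu_hat
post_sd = np.sqrt(se**2 + sigma_hat**2)
p_h1 = 1 - stats.norm.cdf(0.0, loc=post_mean, scale=post_sd)
print(f"Posterior probability of increased risk: {p_h1:.3f}")
```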


Subjects
Adverse Drug Reaction Reporting Systems; Product Surveillance, Postmarketing; Vaccines; Humans; Bayes Theorem; Bias; Probability; Vaccines/adverse effects
4.
J Am Med Inform Assoc ; 31(1): 209-219, 2023 Dec 22.
Article in English | MEDLINE | ID: mdl-37952118

ABSTRACT

OBJECTIVE: Health data standardized to a common data model (CDM) simplify and facilitate research. This study examines the factors that make standardizing observational health data to the Observational Medical Outcomes Partnership (OMOP) CDM successful. MATERIALS AND METHODS: Twenty-five data partners (DPs) from 11 countries received funding from the European Health Data and Evidence Network (EHDEN) to standardize their data. Three surveys, DataQualityDashboard results, and statistics from the conversion process were analyzed qualitatively and quantitatively. Our measures of success were the total number of days to transform source data into the OMOP CDM and participation in network research. RESULTS: The health data converted to the CDM represented more than 133 million patients. Surveys 1, 2, and 3 were completed by 100%, 88%, and 84% of DPs, respectively. The median duration of the 6 key extract, transform, and load (ETL) processes ranged from 4 to 115 days. Of the 25 DPs, 21 were considered applicable for analysis; of these, 52% standardized their data on time and 48% participated in an international collaborative study. DISCUSSION: This study shows that the consistent workflow used by EHDEN is appropriate for supporting the successful standardization of observational data across Europe. Across the 25 successful transformations, we confirmed that getting the right people for the ETL is critical and that vocabulary mapping requires specific expertise and tool support. Additionally, we learned that teams that proactively prepared for data governance issues were able to avoid considerable delays, improving their ability to finish on time. CONCLUSION: This study provides guidance for future DPs to standardize to the OMOP CDM and participate in distributed networks. We demonstrate that the Observational Health Data Sciences and Informatics community must continue to evaluate and provide guidance and support for what ultimately forms the backbone of how community members generate evidence.


Subjects
Global Health; Medicine; Humans; Databases, Factual; Europe; Electronic Health Records
5.
BMJ Med ; 2(1): e000651, 2023.
Article in English | MEDLINE | ID: mdl-37829182

ABSTRACT

Objective: To assess the uptake of second line antihyperglycaemic drugs among patients with type 2 diabetes mellitus who are receiving metformin. Design: Federated pharmacoepidemiological evaluation in LEGEND-T2DM. Setting: 10 US and seven non-US electronic health record and administrative claims databases in the Observational Health Data Sciences and Informatics network in eight countries from 2011 to the end of 2021. Participants: 4.8 million patients (≥18 years) across US and non-US based databases with type 2 diabetes mellitus who had received metformin monotherapy and had initiated second line treatments. Exposure: The exposure evaluated in each database was calendar year trends, over the study years specific to each cohort. Main outcome measures: The outcome was the incidence of second line antihyperglycaemic drug use (ie, glucagon-like peptide-1 receptor agonists, sodium-glucose cotransporter-2 inhibitors, dipeptidyl peptidase-4 inhibitors, and sulfonylureas) among individuals who were already receiving treatment with metformin. The relative drug class level uptake across cardiovascular risk groups was also evaluated. Results: 4.6 million patients were identified in US databases, 61 382 from Spain, 32 442 from Germany, 25 173 from the UK, 13 270 from France, 5580 from Scotland, 4614 from Hong Kong, and 2322 from Australia. During 2011-21, the combined proportional initiation of the cardioprotective antihyperglycaemic drugs (glucagon-like peptide-1 receptor agonists and sodium-glucose cotransporter-2 inhibitors) increased across all data sources, with the combined initiation of these drugs as second line drugs in 2021 ranging from 35.2% to 68.2% in the US databases, and reaching 15.4% in France, 34.7% in Spain, 50.1% in Germany, and 54.8% in Scotland. From 2016 to 2021, in some US and non-US databases, uptake of glucagon-like peptide-1 receptor agonists and sodium-glucose cotransporter-2 inhibitors increased more markedly among populations with no cardiovascular disease than among patients with established cardiovascular disease. No data source provided evidence of a greater increase in the uptake of these two drug classes in populations with cardiovascular disease compared with no cardiovascular disease. Conclusions: Despite the increase in overall uptake of cardioprotective antihyperglycaemic drugs as second line treatments for type 2 diabetes mellitus, their uptake was lower in patients with cardiovascular disease than in people with no cardiovascular disease over the past decade. A strategy is needed to ensure that medication use is concordant with guideline recommendations to improve outcomes of patients with type 2 diabetes mellitus.

6.
J Biomed Inform ; 145: 104476, 2023 09.
Article in English | MEDLINE | ID: mdl-37598737

ABSTRACT

OBJECTIVE: We developed and evaluated a novel one-shot distributed algorithm for evidence synthesis in distributed research networks with rare outcomes. MATERIALS AND METHODS: Fed-Padé, motivated by a classic mathematical tool, Padé approximants, reconstructs the multi-site data likelihood via a Padé approximant whose key parameters can be computed distributively. Thanks to the simplicity of the [2,2] Padé approximant, Fed-Padé requires only an extremely simple computation and low communication cost from data partners. Specifically, each data partner only needs to compute and share the log-likelihood and its first 4 gradients evaluated at an initial estimator. We evaluated the performance of our algorithm with extensive simulation studies and four observational healthcare databases. RESULTS: Our simulation studies revealed that a [2,2] Padé approximant can reconstruct the multi-site likelihood well, so that Fed-Padé produces estimates nearly identical to those of the pooled analysis. Across all simulation scenarios considered, the median relative bias and rate of instability of Fed-Padé are both <0.1%, whereas meta-analysis estimates have bias up to 50% and instability up to 75%. Furthermore, the confidence intervals derived from the Fed-Padé algorithm showed better coverage of the truth than confidence intervals based on the meta-analysis. In real data analysis, Fed-Padé has a relative bias of <1% for all three comparisons for risks of acute liver injury and decreased libido, whereas the meta-analysis estimates have a substantially higher bias (around 10%). CONCLUSION: The Fed-Padé algorithm is nearly lossless, stable, communication-efficient, and easy to implement for models with rare outcomes. It provides an extremely suitable and convenient approach for synthesizing evidence in distributed research networks with rare outcomes.
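
A toy sketch of the underlying mathematical tool: building a [2,2] Padé approximant of a (log-)likelihood around an initial estimate from only the function value and its first four derivatives there, the kind of aggregates each site could share. This illustrates the approximant itself, not the authors' Fed-Padé code; the target function below is an arbitrary stand-in.

```python
# Sketch: [2,2] Pade approximant from value + first four derivatives at a point.
import math
import numpy as np

def pade_2_2(derivs):
    """Given [f(0), f'(0), ..., f''''(0)], return coefficients (p, q) of the
    [2,2] Pade approximant P(x)/Q(x) with Q(0) = 1, matching f to order x^4."""
    c = np.array([derivs[k] / math.factorial(k) for k in range(5)])  # Taylor coefficients
    # Denominator coefficients solve: c2*q1 + c1*q2 = -c3 and c3*q1 + c2*q2 = -c4
    A = np.array([[c[2], c[1]], [c[3], c[2]]])
    q1, q2 = np.linalg.solve(A, -np.array([c[3], c[4]]))
    p = np.array([c[0], c[1] + c[0] * q1, c[2] + c[1] * q1 + c[0] * q2])
    return p, np.array([1.0, q1, q2])

def evaluate(p, q, x):
    return np.polyval(p[::-1], x) / np.polyval(q[::-1], x)

# Toy target: f(x) = -log(1 + x) - x^2, with derivatives at 0 computed analytically
derivs = [0.0, -1.0, -1.0, -2.0, 6.0]
p, q = pade_2_2(derivs)
x = 0.2
print(f"Pade: {evaluate(p, q, x):.4f}   exact: {-np.log(1 + x) - x**2:.4f}")
```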


Subjects
Algorithms; Machine Learning; Computer Simulation; Meta-Analysis as Topic
7.
Drug Saf ; 46(8): 797-807, 2023 08.
Article in English | MEDLINE | ID: mdl-37328600

ABSTRACT

INTRODUCTION: Vaccine safety surveillance commonly includes a serial testing approach with a sensitive method for 'signal generation' and specific method for 'signal validation.' The extent to which serial testing in real-world studies improves or hinders overall performance in terms of sensitivity and specificity remains unknown. METHODS: We assessed the overall performance of serial testing using three administrative claims and one electronic health record database. We compared type I and II errors before and after empirical calibration for historical comparator, self-controlled case series (SCCS), and the serial combination of those designs against six vaccine exposure groups with 93 negative control and 279 imputed positive control outcomes. RESULTS: The historical comparator design mostly had fewer type II errors than SCCS. SCCS had fewer type I errors than the historical comparator. Before empirical calibration, the serial combination increased specificity and decreased sensitivity. Type II errors mostly exceeded 50%. After empirical calibration, type I errors returned to nominal; sensitivity was lowest when the methods were combined. CONCLUSION: While serial combination produced fewer false-positive signals compared with the most specific method, it generated more false-negative signals compared with the most sensitive method. Using a historical comparator design followed by an SCCS analysis yielded decreased sensitivity in evaluating safety signals relative to a one-stage SCCS approach. While the current use of serial testing in vaccine surveillance may provide a practical paradigm for signal identification and triage, single epidemiological designs should be explored as valuable approaches to detecting signals.
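
A simplified illustration of why serial testing trades sensitivity for specificity: if a signal is declared only when both the sensitive "signal generation" method and the specific "signal validation" method flag, and the two tests are assumed independent, sensitivities and false-positive rates multiply. The operating characteristics below are made up for illustration.

```python
# Sketch (made-up numbers, independence assumed): serial combination flags a
# signal only when BOTH methods flag, so sensitivities and false-positive
# rates multiply; specificity rises while sensitivity falls.
sens_historical, spec_historical = 0.90, 0.70   # sensitive "signal generation"
sens_sccs, spec_sccs = 0.75, 0.90               # specific "signal validation"

sens_serial = sens_historical * sens_sccs
fpr_serial = (1 - spec_historical) * (1 - spec_sccs)
spec_serial = 1 - fpr_serial

print(f"serial sensitivity: {sens_serial:.2f}")   # 0.68, lower than either test
print(f"serial specificity: {spec_serial:.2f}")   # 0.97, higher than either test
```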


Subjects
Vaccines; Humans; Vaccines/adverse effects; Sensitivity and Specificity; Research Design; Databases, Factual; Electronic Health Records
8.
Stat Med ; 42(5): 619-631, 2023 02 28.
Article in English | MEDLINE | ID: mdl-36642826

ABSTRACT

Post-approval safety surveillance of medical products using observational healthcare data can help identify safety issues beyond those found in pre-approval trials. When testing sequentially as data accrue, maximum sequential probability ratio testing (MaxSPRT) is a common approach to maintaining nominal type 1 error. However, the true type 1 error may still deviate from the specified one because of systematic error due to the observational nature of the analysis. This systematic error may persist even after controlling for known confounders. Here we propose to address this issue by combining MaxSPRT with empirical calibration. In empirical calibration, we assume uncertainty about the systematic error in our analysis, a source of uncertainty commonly overlooked in practice. We infer a probability distribution of systematic error by relying on a large set of negative controls: exposure-outcome pairs where no causal effect is believed to exist. Integrating this distribution into our test statistics has previously been shown to restore type 1 error to nominal. Here we show how we can calibrate the critical value central to MaxSPRT. We evaluate this novel approach using simulations and real electronic health records, using H1N1 vaccinations during the 2009-2010 season as an example. Results show that combining empirical calibration with MaxSPRT restores nominal type 1 error. In our real-world example, adjusting for systematic error using empirical calibration has a larger impact than, and hence is just as essential as, adjusting for sequential testing using MaxSPRT. We recommend performing both, using the method described here.
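
A minimal Monte Carlo sketch of the idea behind a calibrated critical value: simulate null test statistics that include systematic error drawn from a bias distribution (such as one estimated from negative controls) and take an upper quantile as the calibrated threshold. This is a simplified illustration, not the MaxSPRT critical-value calculation from the paper; the bias parameters below are hypothetical.

```python
# Sketch: calibrate a critical value by simulating null statistics that
# include systematic error. Simplified illustration with made-up parameters.
import numpy as np

rng = np.random.default_rng(42)
alpha = 0.05
bias_mean, bias_sd = 0.05, 0.10   # hypothetical systematic-error distribution
se = 0.15                         # sampling standard error of the log estimate

# Null log-effect estimates: true effect 0, plus systematic and random error
null_estimates = (rng.normal(bias_mean, bias_sd, size=100_000)
                  + rng.normal(0.0, se, size=100_000))
z_stats = null_estimates / se

naive_cv = 1.645                                 # ignores systematic error
calibrated_cv = np.quantile(z_stats, 1 - alpha)  # accounts for it

print(f"naive critical value:      {naive_cv:.2f}")
print(f"calibrated critical value: {calibrated_cv:.2f}")
```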


Subjects
Influenza A Virus, H1N1 Subtype; Humans; Calibration; Probability; Delivery of Health Care; Electronic Health Records
10.
Front Pharmacol ; 13: 945592, 2022.
Article in English | MEDLINE | ID: mdl-36188566

ABSTRACT

Purpose: Alpha-1 blockers, often used to treat benign prostatic hyperplasia (BPH), have been hypothesized to prevent COVID-19 complications by minimising cytokine storm release. The proposed treatment based on this hypothesis currently lacks support from reliable real-world evidence, however. We leverage an international network of large-scale healthcare databases to generate comprehensive evidence in a transparent and reproducible manner. Methods: In this international cohort study, we deployed electronic health records from Spain (SIDIAP) and the United States (Department of Veterans Affairs, Columbia University Irving Medical Center, IQVIA OpenClaims, Optum DOD, Optum EHR). We assessed association between alpha-1 blocker use and risks of three COVID-19 outcomes-diagnosis, hospitalization, and hospitalization requiring intensive services-using a prevalent-user active-comparator design. We estimated hazard ratios using state-of-the-art techniques to minimize potential confounding, including large-scale propensity score matching/stratification and negative control calibration. We pooled database-specific estimates through random effects meta-analysis. Results: Our study overall included 2.6 and 0.46 million users of alpha-1 blockers and of alternative BPH medications. We observed no significant difference in their risks for any of the COVID-19 outcomes, with our meta-analytic HR estimates being 1.02 (95% CI: 0.92-1.13) for diagnosis, 1.00 (95% CI: 0.89-1.13) for hospitalization, and 1.15 (95% CI: 0.71-1.88) for hospitalization requiring intensive services. Conclusion: We found no evidence of the hypothesized reduction in risks of the COVID-19 outcomes from the prevalent-use of alpha-1 blockers-further research is needed to identify effective therapies for this novel disease.
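
A minimal sketch of the random-effects pooling step mentioned above, using the DerSimonian-Laird estimator to combine database-specific log hazard ratios. The estimates below are made up, and the study's actual pooling may differ in detail.

```python
# Sketch: DerSimonian-Laird random-effects meta-analysis of per-database
# log hazard ratios (all numbers are made up).
import numpy as np

log_hr = np.array([0.02, -0.05, 0.10, 0.00, 0.08, -0.02])  # per-database estimates
se = np.array([0.06, 0.08, 0.20, 0.05, 0.12, 0.09])

w_fixed = 1 / se**2
theta_fixed = np.sum(w_fixed * log_hr) / np.sum(w_fixed)

# Between-database heterogeneity (tau^2) via the DerSimonian-Laird estimator
k = len(log_hr)
Q = np.sum(w_fixed * (log_hr - theta_fixed) ** 2)
C = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - (k - 1)) / C)

w_random = 1 / (se**2 + tau2)
theta = np.sum(w_random * log_hr) / np.sum(w_random)
se_theta = np.sqrt(1 / np.sum(w_random))

hr = np.exp(theta)
ci = np.exp([theta - 1.96 * se_theta, theta + 1.96 * se_theta])
print(f"pooled HR {hr:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```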

11.
J Biomed Inform ; 134: 104204, 2022 10.
Article in English | MEDLINE | ID: mdl-36108816

ABSTRACT

Confounding remains one of the major challenges to causal inference with observational data. This problem is paramount in medicine, where we would like to answer causal questions from large observational datasets like electronic health records (EHRs) and administrative claims. Modern medical data typically contain tens of thousands of covariates. Such a large set carries hope that many of the confounders are directly measured, and further hope that others are indirectly measured through their correlation with measured covariates. How can we exploit these large sets of covariates for causal inference? To help answer this question, this paper examines the performance of the large-scale propensity score (LSPS) approach on causal analysis of medical data. We demonstrate that LSPS may adjust for indirectly measured confounders by including tens of thousands of covariates that may be correlated with them. We present conditions under which LSPS removes bias due to indirectly measured confounders, and we show that LSPS may avoid bias when inadvertently adjusting for variables (like colliders) that otherwise can induce bias. We demonstrate the performance of LSPS with both simulated medical data and real medical data.
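
A rough sketch of the large-scale propensity score (LSPS) idea on simulated data: fit a regularized logistic regression over a very wide covariate matrix and stratify on the estimated score. The OHDSI implementation uses L1-regularized regression with a cross-validated penalty (the Cyclops package); the scikit-learn model, data, and penalty value below are illustrative stand-ins.

```python
# Sketch: regularized propensity model over many covariates, then
# stratification on the score. Simulated data; penalty chosen arbitrarily.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 2000, 1000                                # patients x covariates (kept small here)
X = (rng.random((n, p)) < 0.05).astype(float)    # sparse binary covariates
true_coef = np.zeros(p)
true_coef[:20] = rng.normal(0, 0.5, 20)          # a few covariates drive treatment
treatment = rng.binomial(1, 1 / (1 + np.exp(-(X @ true_coef - 1.0))))

# L1-penalized propensity model over all covariates
ps_model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1, max_iter=1000)
ps_model.fit(X, treatment)
ps = ps_model.predict_proba(X)[:, 1]

# Stratify into propensity-score quintiles (matching on the score also works)
strata = np.digitize(ps, np.quantile(ps, [0.2, 0.4, 0.6, 0.8]))
for s in range(5):
    in_s = strata == s
    print(f"stratum {s}: {treatment[in_s].mean():.2f} treated, n={in_s.sum()}")
```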


Subjects
Confounding Factors, Epidemiologic; Bias; Causality; Propensity Score
12.
Front Pharmacol ; 13: 893484, 2022.
Article in English | MEDLINE | ID: mdl-35873596

ABSTRACT

Background: Routinely collected healthcare data such as administrative claims and electronic health records (EHR) can complement clinical trials and spontaneous reports to detect previously unknown risks of vaccines, but uncertainty remains about the behavior of alternative epidemiologic designs to detect and declare a true risk early. Methods: Using three claims and one EHR database, we evaluate several variants of the case-control, comparative cohort, historical comparator, and self-controlled designs against historical vaccinations using real negative control outcomes (outcomes with no evidence to suggest that they could be caused by the vaccines) and simulated positive control outcomes. Results: Most methods show large type 1 error, often identifying false positive signals. The cohort method appears either positively or negatively biased, depending on the choice of comparator index date. Empirical calibration using effect-size estimates for negative control outcomes can bring type 1 error closer to nominal, often at the cost of increasing type 2 error. After calibration, the self-controlled case series (SCCS) design most rapidly detects small true effect sizes, while the historical comparator performs well for strong effects. Conclusion: When applying any method for vaccine safety surveillance we recommend considering the potential for systematic error, especially due to confounding, which for many designs appears to be substantial. Adjusting for age and sex alone is likely not sufficient to address differences between vaccinated and unvaccinated, and for the cohort method the choice of index date is important for the comparability of the groups. Analysis of negative control outcomes allows both quantification of the systematic error and, if desired, subsequent empirical calibration to restore type 1 error to its nominal value. In order to detect weaker signals, one may have to accept a higher type 1 error.
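
A small sketch of how the type 1 error of a design can be quantified empirically: compute the fraction of negative-control outcomes (true effect = 1) whose confidence interval excludes 1. The estimates below are made up for illustration.

```python
# Sketch: empirical type 1 error = share of negative controls declared
# "significant". Made-up estimates.
import numpy as np

log_rr = np.array([0.30, 0.05, -0.10, 0.45, 0.20, 0.02, 0.60, -0.05])
se = np.array([0.10, 0.15, 0.20, 0.12, 0.25, 0.08, 0.18, 0.30])

lower = log_rr - 1.96 * se
upper = log_rr + 1.96 * se
significant = (lower > 0) | (upper < 0)

type1_error = significant.mean()
print(f"empirical type 1 error: {type1_error:.2f} (nominal 0.05)")
```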

13.
Drug Saf ; 45(7): 791-807, 2022 07.
Article in English | MEDLINE | ID: mdl-35810265

ABSTRACT

INTRODUCTION: Hip fractures among older people are a major public health issue, which can impact quality of life and increase mortality within the year after they occur. A recent observational study found an increased risk of hip fracture in subjects who were new users of tramadol compared with codeine. These drugs have somewhat different indications. Tramadol is indicated for moderate to severe pain and can be used for an extended period; codeine is indicated for mild to moderate pain and cough suppression. OBJECTIVE: In this observational study, we compared the risk of hip fracture in new users of tramadol or codeine, using multiple databases and analytical methods. METHODS: Using data from the Clinical Practice Research Datalink and three US claims databases, we compared the risk of hip fracture after exposure to tramadol or codeine in subjects aged 50-89 years. To ensure comparability, large-scale propensity scores were used to adjust for confounding. RESULTS: We observed a calibrated hazard ratio of 1.10 (95% calibrated confidence interval 0.99-1.21) in the Clinical Practice Research Datalink database, and a pooled estimate across the US databases yielded a calibrated hazard ratio of 1.06 (95% calibrated confidence interval 0.97-1.16). CONCLUSIONS: Our results did not demonstrate a statistically significant difference between subjects treated for pain with tramadol compared with codeine for the outcome of hip fracture risk.


Subjects
Hip Fractures; Tramadol; Aged; Analgesics, Opioid/adverse effects; Codeine/adverse effects; Hip Fractures/chemically induced; Hip Fractures/drug therapy; Hip Fractures/epidemiology; Humans; Pain/drug therapy; Quality of Life; Tramadol/adverse effects
14.
BMJ Open ; 12(6): e057977, 2022 06 09.
Article in English | MEDLINE | ID: mdl-35680274

ABSTRACT

INTRODUCTION: Therapeutic options for type 2 diabetes mellitus (T2DM) have expanded over the last decade with the emergence of cardioprotective novel agents, but without such data for older drugs, leaving a critical gap in our understanding of the relative effects of T2DM agents on cardiovascular risk. METHODS AND ANALYSIS: The Large-scale Evidence Generation and Evaluation across a Network of Databases for T2DM (LEGEND-T2DM) initiative is a series of systematic, large-scale, multinational, real-world comparative cardiovascular effectiveness and safety studies of all four major classes of second-line anti-hyperglycaemic agents: sodium-glucose co-transporter-2 inhibitors, glucagon-like peptide-1 receptor agonists, dipeptidyl peptidase-4 inhibitors and sulfonylureas. LEGEND-T2DM will leverage the Observational Health Data Sciences and Informatics (OHDSI) community, which provides access to a global network of administrative claims and electronic health record data sources representing 190 million patients in the USA and about 50 million internationally. LEGEND-T2DM will identify all adult patients with T2DM who newly initiate a traditionally second-line T2DM agent. Using an active-comparator, new-user cohort design, LEGEND-T2DM will execute all pairwise class-versus-class and drug-versus-drug comparisons in each data source to examine the relative risk of cardiovascular and safety outcomes, producing extensive study diagnostics that assess reliability and generalisability through cohort balance and equipoise. The primary cardiovascular outcomes include a composite of major adverse cardiovascular events and a series of safety outcomes. The study will pursue data-driven, large-scale propensity adjustment for measured confounding and a large set of negative control outcome experiments to address unmeasured and systematic bias. ETHICS AND DISSEMINATION: The study ensures data safety through a federated analytic approach and follows research best practices, including prespecification and full disclosure of results. LEGEND-T2DM is dedicated to open science and transparency and will publicly share all analytic code, from reproducible cohort definitions through turn-key software, enabling other research groups to leverage our methods, data and results to verify and extend our findings.
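
A small sketch of enumerating the pairwise class-versus-class comparisons a LEGEND-style study would run in each data source. The class names come from the abstract; the data source names and surrounding structure are hypothetical placeholders.

```python
# Sketch: enumerate pairwise class-versus-class comparisons per data source.
from itertools import combinations, product

drug_classes = ["SGLT2 inhibitors", "GLP-1 receptor agonists",
                "DPP-4 inhibitors", "Sulfonylureas"]
data_sources = ["claims_db_1", "claims_db_2", "ehr_db_1"]  # placeholder names

comparisons = [(target, comparator, source)
               for (target, comparator), source
               in product(combinations(drug_classes, 2), data_sources)]

print(f"{len(comparisons)} class-level comparisons")   # 6 pairs x 3 sources = 18
for target, comparator, source in comparisons[:3]:
    print(f"{target} vs {comparator} in {source}")
```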


Subjects
Diabetes Mellitus, Type 2; Dipeptidyl-Peptidase IV Inhibitors; Sodium-Glucose Transporter 2 Inhibitors; Adult; Diabetes Mellitus, Type 2/chemically induced; Diabetes Mellitus, Type 2/drug therapy; Dipeptidyl-Peptidase IV Inhibitors/therapeutic use; Humans; Hypoglycemic Agents/adverse effects; Reproducibility of Results; Sodium-Glucose Transporter 2 Inhibitors/therapeutic use; Sulfonylurea Compounds/therapeutic use
15.
J Am Med Inform Assoc ; 29(8): 1366-1371, 2022 07 12.
Article in English | MEDLINE | ID: mdl-35579348

ABSTRACT

OBJECTIVE: To develop a lossless distributed algorithm for the generalized linear mixed model (GLMM) with application to privacy-preserving hospital profiling. MATERIALS AND METHODS: The GLMM is often fitted to implement hospital profiling, using clinical or administrative claims data. Due to individual patient data (IPD) privacy regulations and the computational complexity of GLMM, a distributed algorithm for hospital profiling is needed. We develop a novel distributed penalized quasi-likelihood (dPQL) algorithm to fit GLMM when only aggregated data, rather than IPD, can be shared across hospitals. We also show that the standardized mortality rates, which are often reported as the results of hospital profiling, can also be calculated distributively without sharing IPD. We demonstrate the applicability of the proposed dPQL algorithm by ranking 929 previously studied hospitals on coronavirus disease 2019 (COVID-19) mortality or referral to hospice. RESULTS: The proposed dPQL algorithm is mathematically proven to be lossless, that is, it obtains identical results as if IPD were pooled from all hospitals. In the example of hospital profiling regarding COVID-19 mortality, the dPQL algorithm reached convergence in only 5 iterations, and the estimates of fixed effects, random effects, and mortality rates were identical to those of PQL applied to the pooled data. CONCLUSION: The dPQL algorithm is lossless, privacy-preserving and fast-converging for fitting GLMM. It provides an extremely suitable and convenient distributed approach for hospital profiling.
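
A simplified sketch of the quantity reported in hospital profiling: an observed-to-expected mortality ratio per hospital, with expected deaths taken from a patient-level risk model. This indirect-standardization toy illustrates the output only and does not implement the dPQL algorithm; all data below are simulated.

```python
# Sketch: observed vs expected deaths per hospital (indirect standardization).
# Simulated data; not the dPQL algorithm itself.
import numpy as np

rng = np.random.default_rng(1)
n_hospitals = 5
patients_per_hospital = 400

for h in range(n_hospitals):
    # Patient-level predicted mortality risks (e.g., from a logistic model)
    expected_risk = rng.beta(2, 18, size=patients_per_hospital)
    # Simulated outcomes, with hospital 0 performing worse than predicted
    inflate = 1.5 if h == 0 else 1.0
    deaths = rng.binomial(1, np.clip(expected_risk * inflate, 0, 1))

    observed = deaths.sum()
    expected = expected_risk.sum()
    print(f"hospital {h}: O={observed}, E={expected:.1f}, O/E ratio={observed / expected:.2f}")
```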


Subjects
COVID-19; Privacy; Algorithms; Hospitals; Humans; Likelihood Functions
16.
BMC Med Inform Decis Mak ; 22(1): 142, 2022 05 25.
Article in English | MEDLINE | ID: mdl-35614485

ABSTRACT

BACKGROUND: Accurate prognostic models could aid medical decision making. Large observational databases often contain temporal medical data for large and diverse populations of patients, and it may be possible to learn prognostic models from these data. However, the performance of a prognostic model often undesirably worsens when transported to a different database (or into a clinical setting). In this study we investigate different ensemble approaches that combine prognostic models independently developed using different databases (a simple federated learning approach) to determine whether such ensembles can improve model transportability, that is, perform better in new data than single-database models. METHODS: For a given prediction question we independently trained five single-database models, each using a different observational healthcare database. We then developed and investigated numerous ensemble models (fusion, stacking and mixture of experts) that combined the different database models. Performance of each model was investigated via discrimination and calibration using a leave-one-dataset-out technique, that is, holding out one database for validation and using the remaining four datasets for model development. The internal validation of a model developed using the held-out database was calculated and presented as the 'internal benchmark' for comparison. RESULTS: In this study the fusion ensembles generally outperformed the single-database models when transported to a previously unseen database, and their performance was more consistent across unseen databases. Stacking ensembles performed poorly in terms of discrimination when the labels in the unseen database were limited. Calibration was consistently poor when both ensembles and single-database models were applied to previously unseen databases. CONCLUSION: A simple federated learning approach that implements ensemble techniques to combine models independently developed across different databases for the same prediction question may improve the discriminative performance in new data (a new database or clinical setting) but will need to be recalibrated using the new data. This could help medical decision making by improving prognostic model performance.
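
A minimal sketch of a fusion ensemble: average the predicted probabilities of models trained independently on different databases and evaluate on a held-out database. The data generator, model choice, and shift parameters below are simulated placeholders; the paper's actual models and databases differ.

```python
# Sketch: fusion ensemble = average of per-database model predictions,
# evaluated on an unseen database. Simulated placeholder data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)

def make_db(n, shift):
    """Simulate one database with slightly shifted covariate effects."""
    X = rng.normal(size=(n, 10))
    logits = X @ (np.linspace(0.5, -0.5, 10) + shift) - 1.0
    y = rng.binomial(1, 1 / (1 + np.exp(-logits)))
    return X, y

# Train one model per development database
dev_dbs = [make_db(2000, shift) for shift in (0.0, 0.1, -0.1, 0.05)]
models = [LogisticRegression(max_iter=1000).fit(X, y) for X, y in dev_dbs]

# "Fusion": average the predicted probabilities on the unseen database
X_new, y_new = make_db(2000, 0.2)
fused_pred = np.mean([m.predict_proba(X_new)[:, 1] for m in models], axis=0)

print(f"fusion AUC on unseen database: {roc_auc_score(y_new, fused_pred):.3f}")
```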


Subjects
Delivery of Health Care; Calibration; Databases, Factual; Humans; Prognosis
17.
JMIR Public Health Surveill ; 8(6): e33099, 2022 06 17.
Article in English | MEDLINE | ID: mdl-35482996

ABSTRACT

BACKGROUND: Observational data enable large-scale vaccine safety surveillance but require careful evaluation of the potential sources of bias. One potential source of bias is the index date selection procedure for the unvaccinated cohort or unvaccinated comparison time ("anchoring"). OBJECTIVE: Here, we evaluated different index date selection procedures for 2 vaccinations: COVID-19 and influenza. METHODS: For each vaccine, we extracted patient baseline characteristics on the index date and up to 450 days prior and then compared them to the characteristics of unvaccinated patients indexed on (1) an arbitrary date or (2) the date of a visit. Additionally, we compared vaccinated patients indexed on the date of vaccination and the same patients indexed on a prior date or visit. RESULTS: COVID-19 vaccination and influenza vaccination differ drastically from each other in terms of the populations vaccinated and their status on the day of vaccination. When compared to indexing on a visit in the unvaccinated population, influenza vaccination had markedly higher covariate proportions, and COVID-19 vaccination had lower proportions of most covariates on the index date. In contrast, COVID-19 vaccination had similar covariate proportions when compared to an arbitrary date. These effects attenuated, but were still present, with a longer lookback period. The effect of day 0 was present even when the patients served as their own controls. CONCLUSIONS: Patient baseline characteristics are sensitive to the choice of the index date. In vaccine safety studies, the unexposed index event should represent vaccination settings. Study designs previously used to assess influenza vaccination must be reassessed for COVID-19 to account for a potentially healthier population and the lack of medical activity on the day of vaccination.
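
A small sketch of the kind of baseline comparison described above, using the standardized mean difference (SMD) to contrast covariate prevalences between a vaccinated cohort and an unvaccinated cohort indexed two different ways. The covariate names and proportions below are made up.

```python
# Sketch: standardized mean differences for binary baseline covariates
# between a vaccinated cohort and unvaccinated cohorts under two index-date
# choices. Proportions are made up.
import numpy as np

def smd_binary(p1, p2):
    """Standardized mean difference for a binary covariate given prevalences."""
    pooled_var = (p1 * (1 - p1) + p2 * (1 - p2)) / 2
    return (p1 - p2) / np.sqrt(pooled_var)

covariates = {
    # covariate: (vaccinated, unvaccinated arbitrary date, unvaccinated visit date)
    "recent outpatient visit": (0.30, 0.28, 0.95),
    "hypertension":            (0.35, 0.33, 0.41),
    "diabetes":                (0.15, 0.14, 0.19),
}

for name, (vax, arbitrary, visit) in covariates.items():
    print(f"{name:24s} SMD vs arbitrary date: {smd_binary(vax, arbitrary):+.2f}  "
          f"vs visit date: {smd_binary(vax, visit):+.2f}")
```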


Subjects
COVID-19; Influenza Vaccines; Influenza, Human; COVID-19/epidemiology; COVID-19/prevention & control; COVID-19 Vaccines/adverse effects; Cohort Studies; Humans; Influenza Vaccines/adverse effects; Influenza, Human/epidemiology; Influenza, Human/prevention & control; Patient Acceptance of Health Care
18.
Front Pharmacol ; 13: 837632, 2022.
Article in English | MEDLINE | ID: mdl-35392566

ABSTRACT

Post-marketing vaccine safety surveillance aims to detect adverse events following immunization in a population. Whether certain methods of surveillance are more precise and unbiased in generating safety signals is unclear. Here, we synthesized information from existing literature to provide an overview of the strengths, weaknesses, and clinical applications of epidemiologic and analytical methods used in vaccine monitoring, focusing on cohort, case-control and self-controlled designs. These designs are proposed to be evaluated in the EUMAEUS (Evaluating Use of Methods for Adverse Event Under Surveillance-for vaccines) study because of their widespread use and potential utility. Over the past decades, an increasing number of epidemiological study designs have been used for vaccine safety surveillance. While traditional cohort and case-control study designs remain widely used, newer designs such as the self-controlled case series and self-controlled risk intervals have been developed. Each study design comes with its strengths and limitations, and the most appropriate study design will depend on the availability of resources, access to records, the number and distribution of cases, and the availability of population coverage data. Several assumptions have to be made while using the various study designs, and while the goal is to mitigate any biases, violations of these assumptions are often still present to varying degrees. In our review, we discussed some of the potential biases (i.e., selection bias, misclassification bias and confounding bias) and ways to mitigate them. While the types of epidemiological study designs are well established, a comprehensive comparison of the analytical aspects (including method evaluation and performance metrics) of these study designs is relatively less well studied. We summarized the literature, reporting on two simulation studies that compared the detection time, empirical power, error rate and risk estimate bias across the above-mentioned study designs. While these simulation studies provided insights on the analytic performance of each of the study designs, their applicability to real-world data remains unclear. To bridge that gap, we provided the rationale for the EUMAEUS study, with a brief description of the study design, and discussed how the use of real-world multi-database networks can provide insights into better method evaluation and vaccine safety surveillance.

19.
Nat Commun ; 13(1): 1678, 2022 03 30.
Article in English | MEDLINE | ID: mdl-35354802

ABSTRACT

Linear mixed models are commonly used in healthcare-based association analyses for analyzing multi-site data with heterogeneous site-specific random effects. Due to regulations for protecting patients' privacy, sensitive individual patient data (IPD) typically cannot be shared across sites. We propose an algorithm for fitting distributed linear mixed models (DLMMs) without sharing IPD across sites. This algorithm achieves results identical to those achieved using pooled IPD from multiple sites (i.e., the same effect size and standard error estimates), hence demonstrating the lossless property. The algorithm requires each site to contribute minimal aggregated data in only one round of communication. We demonstrate the lossless property of the proposed DLMM algorithm by investigating the associations between demographic and clinical characteristics and length of hospital stay in COVID-19 patients using administrative claims from the UnitedHealth Group Clinical Discovery Database. We extend this association study by incorporating 120,609 COVID-19 patients from 11 collaborative data sources worldwide.
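
A sketch of the "lossless" idea using ordinary linear regression rather than a mixed model: each site shares only the aggregates X'X and X'y in a single round of communication, and combining them reproduces exactly the coefficients that pooled individual-level data would give. The actual DLMM algorithm extends this to site-specific random effects; the data below are simulated.

```python
# Sketch: lossless distributed estimation via shared sufficient statistics.
# Illustrates the idea only; the DLMM additionally handles random effects.
import numpy as np

rng = np.random.default_rng(3)
true_beta = np.array([1.0, 0.5, -0.3])

def simulate_site(n):
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
    y = X @ true_beta + rng.normal(scale=0.5, size=n)
    return X, y

sites = [simulate_site(n) for n in (300, 500, 200)]

# Each site contributes only aggregates (one round of communication)
xtx = sum(X.T @ X for X, _ in sites)
xty = sum(X.T @ y for X, y in sites)
beta_distributed = np.linalg.solve(xtx, xty)

# Reference: pool individual-level data (not allowed in practice)
X_all = np.vstack([X for X, _ in sites])
y_all = np.concatenate([y for _, y in sites])
beta_pooled, *_ = np.linalg.lstsq(X_all, y_all, rcond=None)

print(np.allclose(beta_distributed, beta_pooled))  # True: identical estimates
```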


Subjects
COVID-19; Algorithms; COVID-19/epidemiology; Confidentiality; Databases, Factual; Humans; Linear Models
20.
Stat Methods Med Res ; 31(3): 438-450, 2022 03.
Article in English | MEDLINE | ID: mdl-34841975

ABSTRACT

Studies of the effects of medical interventions increasingly take place in distributed research settings using data from multiple clinical data sources, including electronic health records and administrative claims. In such settings, privacy concerns typically prohibit sharing of individual patient data, and instead cross-network analyses can only utilize summary statistics from the individual databases, such as hazard ratios and standard errors. In the specific but very common context of the Cox proportional hazards model, we show that combining such per-site summary statistics into a single network-wide estimate using standard meta-analysis methods leads to substantial bias when outcome counts are small. This bias derives primarily from the normal approximations of the per-site likelihood that these methods utilize. Here we propose and evaluate methods that eschew normal approximations in favor of three more flexible approximations: a skew-normal, a one-dimensional grid, and a custom parametric function that mimics the behavior of the Cox likelihood function. In extensive simulation studies, we demonstrate how these approximations impact bias in the context of both fixed-effects and (Bayesian) random-effects models. We then apply these approaches to three real-world studies of the comparative safety of antidepressants, each using data from four observational health care databases.
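
A toy sketch of the one-dimensional grid idea: each site evaluates its log-likelihood profile for the log hazard ratio on a shared grid, shares only those grid values, and the network-wide fixed-effects estimate maximizes their sum, avoiding any normal approximation. To keep the example self-contained, a two-group Poisson likelihood stands in for the Cox partial likelihood used in the paper; data are simulated.

```python
# Sketch: combine per-site grid approximations of the likelihood by summing
# log-likelihood values on a shared grid. Poisson stand-in for the Cox model.
import numpy as np

rng = np.random.default_rng(5)
grid = np.linspace(-2, 2, 401)          # shared grid of log rate/hazard ratios
true_beta = 0.4

def site_grid_loglik(n_exposed, n_unexposed, base_rate=0.01):
    """Log-likelihood of a two-group Poisson model evaluated on the grid,
    with the baseline rate profiled out at each grid point."""
    y_unexp = rng.poisson(base_rate, n_unexposed).sum()
    y_exp = rng.poisson(base_rate * np.exp(true_beta), n_exposed).sum()
    lam0 = (y_unexp + y_exp) / (n_unexposed + n_exposed * np.exp(grid))
    lam1 = lam0 * np.exp(grid)
    return (y_unexp * np.log(lam0) - n_unexposed * lam0
            + y_exp * np.log(lam1) - n_exposed * lam1)

# Each site shares only its grid of log-likelihood values
site_grids = [site_grid_loglik(4000, 4000) for _ in range(4)]
combined = np.sum(site_grids, axis=0)

beta_hat = grid[np.argmax(combined)]
print(f"network-wide log-HR estimate: {beta_hat:.2f} (true {true_beta})")
```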


Subjects
Electronic Health Records; Bayes Theorem; Bias; Humans; Likelihood Functions; Proportional Hazards Models